Application of the NIST AI Risk Management Framework to Surveillance Technology
Swaminathan, Nandhini, Danks, David
This study offers an in-depth analysis of the application and implications of the National Institute of Standards and Technology's AI Risk Management Framework (NIST AI RMF) within the domain of surveillance technologies, particularly facial recognition technology. Given the inherently high-risk and consequential nature of facial recognition systems, our research emphasizes the critical need for a structured approach to risk management in this sector. The paper presents a detailed case study demonstrating the utility of the NIST AI RMF in identifying and mitigating risks that might otherwise remain unnoticed in these technologies. Our primary objective is to develop a comprehensive risk management strategy that advances the practice of responsible AI utilization in feasible, scalable ways. We propose a six-step process tailored to the specific challenges of surveillance technology that aims to produce a more systematic and effective risk management practice. This process emphasizes continual assessment and improvement to help companies manage AI-related risks more robustly and ensure ethical and responsible deployment of AI systems. These insights contribute to the evolving discourse on AI governance and risk management, highlighting areas for future refinement and development in frameworks like the NIST AI RMF.

Surveillance technologies are increasingly widespread in both public and private spaces, often being developed and deployed with little engagement from relevant stakeholders. Most notably, the individuals subject to the surveillance technology are rarely included in creating that technology. As an illustration of both prominence and controversy, one may consider the AI system developed by Clearview AI Inc. to monitor and record the activities of individuals and groups, including rapid face identification.
Their system has come under close scrutiny for the ways that the organization scraped images and training data from the Internet; the company is currently under investigation in multiple jurisdictions for scraping billions of images from social media sites without users' consent [1, 2], and other companies like Facebook, Twitter, Venmo, and Google have issued cease and desist letters citing violations of their terms of service [3].
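The continual assessment and improvement that the proposed process emphasizes can be supported by a living risk register keyed to the NIST AI RMF's four core functions (Govern, Map, Measure, Manage). The following is a minimal Python sketch of such a register; the class, field names, scoring heuristic, and example entries are all hypothetical illustrations, not part of the framework or the paper's six-step process.

```python
from dataclasses import dataclass, field
from datetime import date

# NIST AI RMF 1.0 organizes risk management into four core functions.
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class RiskEntry:
    description: str               # e.g. "non-consensual training data"
    rmf_function: str              # which RMF function addresses the risk
    severity: int                  # 1 (low) .. 5 (high), hypothetical scale
    likelihood: int                # 1 (rare) .. 5 (frequent)
    mitigations: list = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    def __post_init__(self):
        if self.rmf_function not in RMF_FUNCTIONS:
            raise ValueError(f"unknown RMF function: {self.rmf_function}")

    @property
    def score(self):
        # Simple severity-times-likelihood scoring, a common heuristic
        # for prioritizing which risks to review first.
        return self.severity * self.likelihood

register = [
    RiskEntry("images scraped without subjects' consent", "Map", 5, 5,
              ["audit data provenance", "obtain consent or purge data"]),
    RiskEntry("false identification of individuals", "Measure", 5, 3,
              ["report false-match rates per demographic group"]),
]

# Surface the highest-scoring risks for the next assessment cycle.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(entry.rmf_function, "|", entry.description, "| score:", entry.score)
```

Keeping `last_reviewed` on each entry makes the register auditable and supports the iterative review cadence the paper advocates.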
White House lays out its AI damage control plan - and Kamala Harris will be the program's czar
The White House has unveiled its plan to crack down on the AI race amid growing concerns it could upend life as we know it. The Biden Administration said the technology was 'one of the most powerful' of our time, adding: 'but in order to seize the opportunities it presents, we must first mitigate its risks.' The plan is to launch 25 research institutes across the US and to seek assurances from four companies, including Google, Microsoft and ChatGPT's creator OpenAI, which will 'participate in a public evaluation.' Many of the world's best minds have warned about the dangers of AI, specifically that it could destroy humanity if an assessment of risk is not done now. Figures like Elon Musk fear AI will soon surpass human intelligence and develop independent thinking.
Can NIST move 'trustworthy AI' forward with new draft of AI risk management framework?
Is your AI trustworthy or not? As the adoption of AI solutions increases across the board, consumers and regulators alike expect greater transparency over how these systems work. Today's organizations not only need to be able to identify how AI systems process data and make decisions to ensure they are ethical and bias-free, but they also need to measure the level of risk posed by these solutions.
There's More to AI Bias Than Biased Data, NIST Report Highlights
As a step toward improving our ability to identify and manage the harmful effects of bias in artificial intelligence (AI) systems, researchers at the National Institute of Standards and Technology (NIST) recommend widening the scope of where we look for the source of these biases -- beyond the machine learning processes and data used to train AI software to the broader societal factors that influence how technology is developed. The recommendation is a core message of a revised NIST publication, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), which reflects public comments the agency received on its draft version released last summer. As part of a larger effort to support the development of trustworthy and responsible AI, the document offers guidance connected to the AI Risk Management Framework that NIST is developing. According to NIST's Reva Schwartz, the main distinction between the draft and final versions of the publication is the new emphasis on how bias manifests itself not only in AI algorithms and the data used to train them, but also in the societal context in which AI systems are used. "Context is everything," said Schwartz, principal investigator for AI bias and one of the report's authors.
FLI October 2021 Newsletter - Future of Life Institute
FLI engages on AI Risk Management Framework in the US. FLI continues to advise the National Institute of Standards and Technology (NIST) in its development of guidance on artificial intelligence, including the critically important AI Risk Management Framework. Our latest comments from this month on the Risk Management Framework raised numerous policy issues, including the need for NIST to account for aggregate risks from low-probability, high-consequence effects of AI systems, and the need to proactively ensure the alignment of ever more powerful advanced or general AI systems.
Artificial intelligence - do you know the risks?
As part of developing an AI Risk Management Framework, the US National Institute of Standards and Technology (NIST) has published a draft report identifying three main categories of risk of which those designing, developing and using AI should be aware. The first concerns the reliability, accuracy and robustness of the systems being used. Most relevant to AI developers and designers, evaluation criteria should be used to assess accuracy and sources of error. Particular care should be taken when applying AI to new data, and accepted standards of safety and security must be addressed. In general, assessments of an AI system, or of a decision derived from AI, ought to be open to scrutiny by a human.
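Evaluation criteria for accuracy and sources of error in a face-identification system are typically expressed as false-match and false-non-match rates over labeled comparison trials. The sketch below is a minimal, hypothetical illustration of computing those two rates at a given decision threshold; the function name, toy data, and threshold are assumptions for illustration, and real evaluations use far larger, carefully designed protocols.

```python
def error_rates(trials, threshold):
    """Compute (false_match_rate, false_non_match_rate).

    trials: list of (similarity_score, is_same_person) pairs, where
    is_same_person is the ground-truth label for the comparison.
    """
    impostor = [s for s, same in trials if not same]
    genuine = [s for s, same in trials if same]
    # False match: an impostor pair scores at or above the threshold,
    # so the system would wrongly declare two different people a match.
    fmr = sum(s >= threshold for s in impostor) / len(impostor)
    # False non-match: a genuine pair scores below the threshold,
    # so the system would wrongly reject a true match.
    fnmr = sum(s < threshold for s in genuine) / len(genuine)
    return fmr, fnmr

# Toy trials: (similarity score, ground truth "same person?").
trials = [(0.91, True), (0.55, True), (0.88, False), (0.12, False)]
fmr, fnmr = error_rates(trials, threshold=0.80)
print("FMR:", fmr, "FNMR:", fnmr)
```

Sweeping the threshold trades the two error rates against each other, which is why assessments should report both rates, ideally disaggregated by demographic group, rather than a single headline accuracy figure.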